
Take advantage of the Span<T> namespace changes #8477

Merged
andrewlock merged 12 commits into master from andrew/use-new-span-t
Apr 24, 2026

Conversation

@andrewlock
Member

@andrewlock andrewlock commented Apr 17, 2026

Summary of changes

Remove some branching code that's no longer required after #8476 moved Span<T> to the System namespace.

Reason for change

This sort of cleanup is the reason we made that change: to reduce maintenance.

Implementation details

Set the 🤖 looking for possible cases, so it's not exhaustive, but it gives a taster. I think most of these make sense. Nothing outstanding, but it's the little things.

Test coverage

Just a refactoring, so covered by existing tests.

Other details

By definition, we don't really expect to see performance improvements for this, other than potentially some reduced allocations on .NET Framework. The primary benefit is developer experience (devx).

Depends on a stack of PRs updating our vendored system code.

@andrewlock andrewlock added the type:refactor and AI Generated (Largely based on code generated by an AI or LLM; this label is the same across all dd-trace-* repos) labels Apr 17, 2026
@andrewlock andrewlock force-pushed the andrew/use-new-span-t branch from 6b128fd to 49790cb Compare April 17, 2026 15:05
@pr-commenter

pr-commenter Bot commented Apr 17, 2026

Benchmarks

Benchmark execution time: 2026-04-24 12:37:36

Comparing candidate commit d1485d3 in PR branch andrew/use-new-span-t with baseline commit 8339c77 in branch master.

Found 0 performance improvements and 0 performance regressions! Performance is the same for 27 metrics, 0 unstable metrics, 60 known flaky benchmarks, 27 flaky benchmarks without significant changes.

Explanation

This is an A/B test comparing a candidate commit's performance against that of a baseline commit. Performance changes are noted in the tables below as:

  • 🟩 = significantly better candidate vs. baseline
  • 🟥 = significantly worse candidate vs. baseline

We compute a confidence interval (CI) over the relative difference of means between metrics from the candidate and baseline commits, considering the baseline as the reference.

If the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD), the change is considered significant.

Feel free to reach out to #apm-benchmarking-platform on Slack if you have any questions.

More details about the CI and significant changes

You can imagine this CI as a range of values that is likely to contain the true difference of means between the candidate and baseline commits.

CIs of the difference of means are often centered around 0%, because often changes are not that big:

---------------------------------(------|---^--------)-------------------------------->
                              -0.6%    0%  0.3%     +1.2%
                                 |          |        |
         lower bound of the CI --'          |        |
sample mean (center of the CI) -------------'        |
         upper bound of the CI ----------------------'

As described above, a change is considered significant if the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD).

For instance, for an execution time metric, this confidence interval indicates a significantly worse performance:

----------------------------------------|---------|---(---------^---------)---------->
                                       0%        1%  1.3%      2.2%      3.1%
                                                  |   |         |         |
       significant impact threshold --------------'   |         |         |
                      lower bound of CI --------------'         |         |
       sample mean (center of the CI) --------------------------'         |
                      upper bound of CI ----------------------------------'

Known flaky benchmarks

These benchmarks are marked as flaky and will not trigger a failure. Modify FLAKY_BENCHMARKS_REGEX to control which benchmarks are marked as flaky.

scenario:Benchmarks.Trace.ActivityBenchmark.StartStopWithChild net6.0

  • 🟩 throughput [+10086.817op/s; +12314.487op/s] or [+8.478%; +10.351%]

scenario:Benchmarks.Trace.AgentWriterBenchmark.WriteAndFlushEnrichedTraces net472

  • 🟥 execution_time [+304.548ms; +307.647ms] or [+151.128%; +152.665%]

scenario:Benchmarks.Trace.AgentWriterBenchmark.WriteAndFlushEnrichedTraces net6.0

  • 🟥 execution_time [+383.262ms; +385.443ms] or [+302.801%; +304.523%]

scenario:Benchmarks.Trace.AgentWriterBenchmark.WriteAndFlushEnrichedTraces netcoreapp3.1

  • 🟥 execution_time [+400.661ms; +403.122ms] or [+354.570%; +356.747%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.AllCycleMoreComplexBody net472

  • 🟥 allocated_mem [+1.308KB; +1.308KB] or [+27.529%; +27.541%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.AllCycleMoreComplexBody net6.0

  • 🟥 allocated_mem [+471 bytes; +472 bytes] or [+9.977%; +9.987%]
  • 🟩 execution_time [-15.844ms; -11.679ms] or [-7.400%; -5.454%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.AllCycleMoreComplexBody netcoreapp3.1

  • 🟥 allocated_mem [+1.272KB; +1.272KB] or [+27.502%; +27.510%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.AllCycleSimpleBody net472

  • 🟥 allocated_mem [+1.307KB; +1.307KB] or [+105.746%; +105.759%]
  • 🟥 throughput [-259466.480op/s; -256279.725op/s] or [-26.493%; -26.167%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.AllCycleSimpleBody net6.0

  • 🟥 allocated_mem [+471 bytes; +472 bytes] or [+38.558%; +38.566%]
  • 🟩 execution_time [-25.739ms; -20.832ms] or [-11.479%; -9.290%]
  • 🟥 throughput [-70064.277op/s; -47252.083op/s] or [-7.485%; -5.048%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.AllCycleSimpleBody netcoreapp3.1

  • 🟥 allocated_mem [+1.272KB; +1.272KB] or [+105.292%; +105.304%]
  • 🟥 throughput [-137170.847op/s; -121359.242op/s] or [-19.709%; -17.437%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.ObjectExtractorMoreComplexBody net6.0

  • 🟩 throughput [+10327.157op/s; +13285.455op/s] or [+6.571%; +8.453%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.ObjectExtractorMoreComplexBody netcoreapp3.1

  • 🟩 throughput [+6942.256op/s; +9595.919op/s] or [+5.530%; +7.644%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.ObjectExtractorSimpleBody net6.0

  • 🟩 throughput [+357294.878op/s; +380334.461op/s] or [+11.914%; +12.682%]

scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.ObjectExtractorSimpleBody netcoreapp3.1

  • 🟩 execution_time [-18.193ms; -13.745ms] or [-8.386%; -6.336%]
  • 🟩 throughput [+171606.392op/s; +226009.085op/s] or [+6.812%; +8.971%]

scenario:Benchmarks.Trace.Asm.AppSecEncoderBenchmark.EncodeArgs net472

  • 🟥 execution_time [+300.399ms; +300.911ms] or [+150.099%; +150.355%]

scenario:Benchmarks.Trace.Asm.AppSecEncoderBenchmark.EncodeArgs net6.0

  • 🟥 execution_time [+301.248ms; +315.937ms] or [+151.920%; +159.328%]

scenario:Benchmarks.Trace.Asm.AppSecEncoderBenchmark.EncodeArgs netcoreapp3.1

  • 🟥 execution_time [+299.718ms; +310.291ms] or [+150.975%; +156.301%]

scenario:Benchmarks.Trace.Asm.AppSecEncoderBenchmark.EncodeLegacyArgs net472

  • 🟥 execution_time [+298.159ms; +298.826ms] or [+146.444%; +146.772%]

scenario:Benchmarks.Trace.Asm.AppSecEncoderBenchmark.EncodeLegacyArgs net6.0

  • 🟥 execution_time [+294.664ms; +297.458ms] or [+144.050%; +145.416%]

scenario:Benchmarks.Trace.Asm.AppSecEncoderBenchmark.EncodeLegacyArgs netcoreapp3.1

  • 🟥 execution_time [+301.299ms; +303.168ms] or [+150.589%; +151.523%]

scenario:Benchmarks.Trace.Asm.AppSecWafBenchmark.RunWafRealisticBenchmarkWithAttack net6.0

  • 🟥 execution_time [+22.636µs; +46.345µs] or [+7.227%; +14.796%]
  • 🟥 throughput [-430.412op/s; -231.017op/s] or [-13.417%; -7.201%]

scenario:Benchmarks.Trace.AspNetCoreBenchmark.SendRequest net472

  • 🟥 execution_time [+299.916ms; +300.560ms] or [+149.689%; +150.011%]

scenario:Benchmarks.Trace.AspNetCoreBenchmark.SendRequest net6.0

  • unstable execution_time [+363.084ms; +401.360ms] or [+394.505%; +436.094%]
  • 🟩 throughput [+1123.511op/s; +1268.015op/s] or [+9.232%; +10.419%]

scenario:Benchmarks.Trace.AspNetCoreBenchmark.SendRequest netcoreapp3.1

  • unstable execution_time [+332.923ms; +360.910ms] or [+252.785%; +274.036%]
  • 🟩 throughput [+645.179op/s; +846.338op/s] or [+6.246%; +8.193%]

scenario:Benchmarks.Trace.CIVisibilityProtocolWriterBenchmark.WriteAndFlushEnrichedTraces net472

  • unstable execution_time [+325.005ms; +402.183ms] or [+149.434%; +184.920%]
  • 🟥 throughput [-512.291op/s; -453.666op/s] or [-46.419%; -41.107%]

scenario:Benchmarks.Trace.CIVisibilityProtocolWriterBenchmark.WriteAndFlushEnrichedTraces net6.0

  • unstable execution_time [+200.937ms; +334.145ms] or [+85.631%; +142.399%]
  • 🟥 throughput [-744.389op/s; -660.937op/s] or [-49.651%; -44.085%]

scenario:Benchmarks.Trace.CIVisibilityProtocolWriterBenchmark.WriteAndFlushEnrichedTraces netcoreapp3.1

  • 🟥 execution_time [+338.588ms; +346.595ms] or [+202.515%; +207.304%]
  • 🟥 throughput [-398.288op/s; -363.542op/s] or [-27.732%; -25.313%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OptimizedCharSliceWithPool netcoreapp3.1

  • unstable throughput [-7.401op/s; +54.117op/s] or [-1.381%; +10.101%]

scenario:Benchmarks.Trace.CharSliceBenchmark.OriginalCharSlice net6.0

  • 🟩 execution_time [-195.001µs; -153.768µs] or [-9.878%; -7.789%]
  • 🟩 throughput [+45.129op/s; +56.324op/s] or [+8.909%; +11.119%]

scenario:Benchmarks.Trace.ElasticsearchBenchmark.CallElasticsearch net472

  • 🟥 execution_time [+303.429ms; +304.585ms] or [+152.801%; +153.383%]

scenario:Benchmarks.Trace.ElasticsearchBenchmark.CallElasticsearch net6.0

  • 🟥 execution_time [+301.129ms; +302.787ms] or [+150.896%; +151.727%]

scenario:Benchmarks.Trace.ElasticsearchBenchmark.CallElasticsearch netcoreapp3.1

  • 🟥 execution_time [+300.453ms; +303.754ms] or [+150.935%; +152.593%]

scenario:Benchmarks.Trace.ElasticsearchBenchmark.CallElasticsearchAsync net472

  • 🟥 execution_time [+302.392ms; +303.739ms] or [+151.851%; +152.528%]
  • 🟩 throughput [+16385.347op/s; +18053.701op/s] or [+5.489%; +6.048%]

scenario:Benchmarks.Trace.ElasticsearchBenchmark.CallElasticsearchAsync net6.0

  • 🟥 execution_time [+298.541ms; +300.472ms] or [+147.615%; +148.570%]

scenario:Benchmarks.Trace.ElasticsearchBenchmark.CallElasticsearchAsync netcoreapp3.1

  • 🟥 execution_time [+303.123ms; +306.604ms] or [+153.636%; +155.400%]

scenario:Benchmarks.Trace.GraphQLBenchmark.ExecuteAsync net472

  • 🟥 execution_time [+301.841ms; +304.785ms] or [+151.497%; +152.975%]

scenario:Benchmarks.Trace.GraphQLBenchmark.ExecuteAsync net6.0

  • 🟥 execution_time [+300.877ms; +303.067ms] or [+149.960%; +151.051%]
  • 🟩 throughput [+30715.703op/s; +37653.700op/s] or [+6.099%; +7.477%]

scenario:Benchmarks.Trace.GraphQLBenchmark.ExecuteAsync netcoreapp3.1

  • 🟥 execution_time [+300.892ms; +303.414ms] or [+149.691%; +150.946%]

scenario:Benchmarks.Trace.ILoggerBenchmark.EnrichedLog net6.0

  • 🟩 execution_time [-16.171ms; -12.479ms] or [-7.520%; -5.803%]

scenario:Benchmarks.Trace.Iast.StringAspectsBenchmark.StringConcatAspectBenchmark net472

  • unstable execution_time [+8.794µs; +51.001µs] or [+2.172%; +12.598%]

scenario:Benchmarks.Trace.Iast.StringAspectsBenchmark.StringConcatAspectBenchmark net6.0

  • 🟩 allocated_mem [-19.639KB; -19.618KB] or [-7.164%; -7.156%]
  • unstable execution_time [-53.689µs; -2.272µs] or [-10.611%; -0.449%]

scenario:Benchmarks.Trace.Iast.StringAspectsBenchmark.StringConcatAspectBenchmark netcoreapp3.1

  • 🟩 allocated_mem [-25.753KB; -25.735KB] or [-9.388%; -9.382%]

scenario:Benchmarks.Trace.Iast.StringAspectsBenchmark.StringConcatBenchmark net6.0

  • unstable execution_time [+8.852µs; +13.213µs] or [+20.923%; +31.231%]
  • 🟥 throughput [-5653.769op/s; -3849.297op/s] or [-23.801%; -16.204%]

scenario:Benchmarks.Trace.Iast.StringAspectsBenchmark.StringConcatBenchmark netcoreapp3.1

  • unstable execution_time [-14.695µs; -7.699µs] or [-22.800%; -11.944%]
  • 🟩 throughput [+1977.041op/s; +3438.375op/s] or [+12.130%; +21.096%]

scenario:Benchmarks.Trace.Log4netBenchmark.EnrichedLog net472

  • 🟥 execution_time [+299.354ms; +301.081ms] or [+151.310%; +152.183%]

scenario:Benchmarks.Trace.Log4netBenchmark.EnrichedLog net6.0

  • 🟥 execution_time [+303.732ms; +306.434ms] or [+154.599%; +155.974%]

scenario:Benchmarks.Trace.Log4netBenchmark.EnrichedLog netcoreapp3.1

  • 🟥 execution_time [+297.502ms; +299.758ms] or [+148.937%; +150.066%]

scenario:Benchmarks.Trace.RedisBenchmark.SendReceive net6.0

  • 🟩 throughput [+30897.319op/s; +34282.301op/s] or [+5.848%; +6.489%]

scenario:Benchmarks.Trace.SerilogBenchmark.EnrichedLog net472

  • 🟥 execution_time [+298.650ms; +300.346ms] or [+148.850%; +149.695%]

scenario:Benchmarks.Trace.SerilogBenchmark.EnrichedLog net6.0

  • 🟥 execution_time [+301.947ms; +303.309ms] or [+151.623%; +152.307%]

scenario:Benchmarks.Trace.SerilogBenchmark.EnrichedLog netcoreapp3.1

  • 🟥 execution_time [+303.758ms; +306.040ms] or [+154.047%; +155.204%]

scenario:Benchmarks.Trace.SingleSpanAspNetCoreBenchmark.SingleSpanAspNetCore net472

  • 🟥 execution_time [+300.043ms; +300.698ms] or [+149.663%; +149.990%]
  • 🟩 throughput [+61071816.422op/s; +61330252.266op/s] or [+44.476%; +44.664%]

scenario:Benchmarks.Trace.SingleSpanAspNetCoreBenchmark.SingleSpanAspNetCore net6.0

  • unstable execution_time [+371.051ms; +402.046ms] or [+461.469%; +500.015%]
  • 🟩 throughput [+1030.540op/s; +1193.883op/s] or [+7.967%; +9.229%]

scenario:Benchmarks.Trace.SingleSpanAspNetCoreBenchmark.SingleSpanAspNetCore netcoreapp3.1

  • 🟥 execution_time [+299.446ms; +300.322ms] or [+149.357%; +149.794%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishScope net6.0

  • 🟩 throughput [+83462.077op/s; +90942.732op/s] or [+7.793%; +8.491%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishScope netcoreapp3.1

  • 🟩 throughput [+46463.079op/s; +66806.116op/s] or [+5.378%; +7.733%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishSpan net6.0

  • 🟩 throughput [+84924.645op/s; +114677.494op/s] or [+6.573%; +8.876%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishSpan netcoreapp3.1

  • 🟩 throughput [+67045.610op/s; +75541.741op/s] or [+6.659%; +7.503%]

scenario:Benchmarks.Trace.SpanBenchmark.StartFinishTwoScopes net6.0

  • 🟩 throughput [+50087.728op/s; +54540.781op/s] or [+9.095%; +9.904%]

scenario:Benchmarks.Trace.TraceAnnotationsBenchmark.RunOnMethodBegin net6.0

  • 🟩 throughput [+96273.413op/s; +112990.163op/s] or [+10.756%; +12.624%]

Known flaky benchmarks without significant changes:

  • scenario:Benchmarks.Trace.ActivityBenchmark.StartStopWithChild net472
  • scenario:Benchmarks.Trace.ActivityBenchmark.StartStopWithChild netcoreapp3.1
  • scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.ObjectExtractorMoreComplexBody net472
  • scenario:Benchmarks.Trace.Asm.AppSecBodyBenchmark.ObjectExtractorSimpleBody net472
  • scenario:Benchmarks.Trace.Asm.AppSecWafBenchmark.RunWafRealisticBenchmark net472
  • scenario:Benchmarks.Trace.Asm.AppSecWafBenchmark.RunWafRealisticBenchmark net6.0
  • scenario:Benchmarks.Trace.Asm.AppSecWafBenchmark.RunWafRealisticBenchmark netcoreapp3.1
  • scenario:Benchmarks.Trace.Asm.AppSecWafBenchmark.RunWafRealisticBenchmarkWithAttack net472
  • scenario:Benchmarks.Trace.Asm.AppSecWafBenchmark.RunWafRealisticBenchmarkWithAttack netcoreapp3.1
  • scenario:Benchmarks.Trace.CharSliceBenchmark.OptimizedCharSlice net472
  • scenario:Benchmarks.Trace.CharSliceBenchmark.OptimizedCharSlice net6.0
  • scenario:Benchmarks.Trace.CharSliceBenchmark.OptimizedCharSlice netcoreapp3.1
  • scenario:Benchmarks.Trace.CharSliceBenchmark.OptimizedCharSliceWithPool net472
  • scenario:Benchmarks.Trace.CharSliceBenchmark.OptimizedCharSliceWithPool net6.0
  • scenario:Benchmarks.Trace.CharSliceBenchmark.OriginalCharSlice net472
  • scenario:Benchmarks.Trace.CharSliceBenchmark.OriginalCharSlice netcoreapp3.1
  • scenario:Benchmarks.Trace.ILoggerBenchmark.EnrichedLog net472
  • scenario:Benchmarks.Trace.ILoggerBenchmark.EnrichedLog netcoreapp3.1
  • scenario:Benchmarks.Trace.Iast.StringAspectsBenchmark.StringConcatBenchmark net472
  • scenario:Benchmarks.Trace.RedisBenchmark.SendReceive net472
  • scenario:Benchmarks.Trace.RedisBenchmark.SendReceive netcoreapp3.1
  • scenario:Benchmarks.Trace.SpanBenchmark.StartFinishScope net472
  • scenario:Benchmarks.Trace.SpanBenchmark.StartFinishSpan net472
  • scenario:Benchmarks.Trace.SpanBenchmark.StartFinishTwoScopes net472
  • scenario:Benchmarks.Trace.SpanBenchmark.StartFinishTwoScopes netcoreapp3.1
  • scenario:Benchmarks.Trace.TraceAnnotationsBenchmark.RunOnMethodBegin net472
  • scenario:Benchmarks.Trace.TraceAnnotationsBenchmark.RunOnMethodBegin netcoreapp3.1


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 25f6d5e446


```csharp
{
#if NETCOREAPP
    // don't allocate this inside the loop (CA2014)
    Span<Guid> guidSpan = stackalloc Guid[2];
```

P2: Keep the partial-trust Guid byte path

When the net461 tracer runs in a partial-trust AppDomain, creating the shared ID generator now JITs a stackalloc/MemoryMarshal.Cast path. The removed fallback explicitly avoided unsafe-style reinterpretation because this code can be reached by manual instrumentation under partial trust; using stack allocation/reinterpretation here can fail before any trace/span IDs are generated. Keep the old Guid.ToByteArray() path for non-NETCOREAPP targets unless partial-trust support is being dropped.


Member Author


We explicitly don't support partial-trust, and haven't for a long time, so I don't think this matters? 🤔

Member


Yeah. It must be referring to the (outdated) comments you removed below:

```csharp
// we can't use `unsafe` pointers in this code because it can be called
// from manual instrumentation which could be running in partial trust.
// if we ever drop support for partial trust,
// we can rewrite this to use `unsafe` instead of allocating these arrays.
```

@andrewlock andrewlock force-pushed the andrew/update-vendors-8 branch from c448c23 to ed790e1 Compare April 21, 2026 10:19
@andrewlock andrewlock requested review from a team as code owners April 21, 2026 10:19
@andrewlock andrewlock force-pushed the andrew/use-new-span-t branch from 25f6d5e to b1f8279 Compare April 21, 2026 10:19
```diff
 /// </summary>
 /// <returns>The 64-bit FNV hash of the data, as a <c>ulong</c></returns>
-public static ulong GenerateHash(ReadOnlySpan<byte> data, Version version)
+public static ulong GenerateHash(Span<byte> data, Version version)
```
Collaborator


Granted I'm reviewing these PRs out of order, but I'm trying to understand why these went from ReadOnlySpan to Span and I don't think I see why

Member Author


No, me neither, I think the AI was stupid and this should be fixed 👀

Member Author


Fixed locally, just switched to ReadOnlySpan<T> (I'll push later, to avoid PR rebase stampede 😄 )

Member


Good catch

@andrewlock andrewlock force-pushed the andrew/update-vendors-8 branch from ed790e1 to 0688bae Compare April 23, 2026 12:14
Comment on lines 299 to 300
Member


Good-bye, substrings!

Member


More! 🎉

Collaborator

@bouwkast bouwkast left a comment


👍

I presumed that a lot of the deleted/new files are caused by a rebase or the merging of the stacked branches into this one?

I don't remember them from my initial look at this PR specifically but I may have gotten lost along the way of reviewing the stack

Base automatically changed from andrew/update-vendors-8 to master April 24, 2026 07:01
@andrewlock andrewlock force-pushed the andrew/use-new-span-t branch from b1f8279 to 9b090d5 Compare April 24, 2026 07:06
@andrewlock
Member Author

> 👍
>
> I presumed that a lot of the deleted/new files are caused by a rebase or the merging of the stacked branches into this one?
>
> I don't remember them from my initial look at this PR specifically but I may have gotten lost along the way of reviewing the stack

Yeah, this was because I stopped rebasing the whole stack every time I merged one of the base PRs, because it was flooding CI 😅 I didn't think about the fact that it would ruin the diffs for reviewing, sorry! 🙁

andrewlock and others added 4 commits April 24, 2026 12:31
Remove the outer #if NETCOREAPP guard from ValueStringBuilder.
Now that Span<T> is in the System namespace for vendored types,
the ref struct compiles on .NET Framework too. All dependencies
(ArrayPool, MemoryMarshal, MemoryExtensions) are available via
vendored global usings.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Remove #if NETCOREAPP around Span<byte>/ReadOnlySpan<byte> overloads
of GenerateHash, GenerateV1Hash, and GenerateV1AHash. Delegate the
byte[] private methods to the ReadOnlySpan<byte> versions to remove
code duplication. The string-accepting methods keep their #if gate
as they depend on Encoding.GetBytes(string, Span<byte>).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
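The delegation pattern this commit describes can be sketched as follows. This is a minimal sketch, not the tracer's actual code: the real methods also take a `Version` parameter (selecting FNV-1 vs FNV-1a), omitted here, and the constants are the standard 64-bit FNV values.

```csharp
using System;

static class FnvHash64Sketch
{
    private const ulong OffsetBasis = 14695981039346656037UL;
    private const ulong Prime = 1099511628211UL;

    // Span-based FNV-1a, usable on every TFM now that Span<T> is in System.
    public static ulong GenerateHash(ReadOnlySpan<byte> data)
    {
        var hash = OffsetBasis;
        foreach (var b in data)
        {
            hash = (hash ^ b) * Prime; // FNV-1a: XOR, then multiply
        }

        return hash;
    }

    // The byte[] overload just delegates to the span version,
    // removing the duplicated hashing loop.
    public static ulong GenerateHash(byte[] data) => GenerateHash(data.AsSpan());
}
```

With the span overload available on all TFMs, the `#if NETCOREAPP` guard and the second copy of the loop both disappear.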
Remove #if NETCOREAPP3_1_OR_GREATER around all eight Span<byte>
overloads (WriteVarLong, WriteVarLongZigZag, WriteVarInt,
WriteVarIntZigZag, ReadVarLong, ReadVarLongZigZag, ReadVarInt,
ReadVarIntZigZag). These only index into the span and need no
.NET Core-specific BCL methods.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
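Because these helpers only index into the span, they compile unchanged on every TFM once `Span<T>` is in the `System` namespace. A minimal sketch of an unsigned read/write pair in the usual 7-bits-per-byte varint format (method names mirror the commit; the exact signatures are assumptions):

```csharp
using System;

static class VarEncodingSketch
{
    // Write an unsigned varint: low 7 bits per byte, high bit = continuation.
    public static int WriteVarInt(Span<byte> buffer, int offset, uint value)
    {
        while (value >= 0x80)
        {
            buffer[offset++] = (byte)(value | 0x80); // 7 payload bits + continuation flag
            value >>= 7;
        }

        buffer[offset++] = (byte)value; // final byte, continuation bit clear
        return offset;                  // next free position
    }

    public static uint ReadVarInt(ReadOnlySpan<byte> buffer, ref int offset)
    {
        uint result = 0;
        int shift = 0;
        byte b;
        do
        {
            b = buffer[offset++];
            result |= (uint)(b & 0x7F) << shift;
            shift += 7;
        }
        while ((b & 0x80) != 0); // stop when the continuation bit is clear
        return result;
    }
}
```

For example, 300 encodes to the two bytes `0xAC 0x02`, and reading them back yields 300.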
Replace the two-step byte[] allocation path on .NET Framework with
a single stackalloc byte[16] + BinaryPrimitives.WriteUInt64LittleEndian
on all TFMs. BinaryPrimitives and FnvHash64 Span overloads are now
available everywhere via vendored global usings.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
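The single-buffer path the commit describes can be sketched like this (a sketch under assumptions: the `upper`/`lower` halves of a 128-bit id and the write order are illustrative, not the tracer's exact layout):

```csharp
using System;
using System.Buffers.Binary;

static class IdPackingSketch
{
    // Pack two 64-bit halves into a caller-provided 16-byte span.
    // The caller stackallocs once; no intermediate byte[] is created,
    // unlike the old two-step .NET Framework path.
    public static void Write(Span<byte> destination, ulong upper, ulong lower)
    {
        BinaryPrimitives.WriteUInt64LittleEndian(destination, lower);
        BinaryPrimitives.WriteUInt64LittleEndian(destination.Slice(8), upper);
    }
}
```

Usage would look like `Span<byte> bytes = stackalloc byte[16]; IdPackingSketch.Write(bytes, upper, lower);` followed by hashing the span, since per the commit both `BinaryPrimitives` and the FNV span overloads are now available everywhere via vendored global usings.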
andrewlock and others added 8 commits April 24, 2026 12:31
Replace the byte[]-allocating Guid.ToByteArray() path with
stackalloc Guid[2] + MemoryMarshal.Cast<Guid, ulong> on all TFMs.
MemoryMarshal is available via vendored global using on .NET
Framework. Removes unused _buffer field and GetBuffer helper.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
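The `stackalloc Guid[2]` + `MemoryMarshal.Cast` technique from this commit can be sketched as below (the wrapping method and its name are illustrative; the real code feeds the values into the shared ID generator):

```csharp
using System;
using System.Runtime.InteropServices;

static class RandomSeedSketch
{
    // 32 bytes of Guid randomness on the stack, reinterpreted as four
    // ulongs via MemoryMarshal.Cast, with no Guid.ToByteArray() allocations.
    public static (ulong, ulong, ulong, ulong) NextFour()
    {
        Span<Guid> guidSpan = stackalloc Guid[2];
        guidSpan[0] = Guid.NewGuid();
        guidSpan[1] = Guid.NewGuid();

        // Same memory, viewed as ulongs; nothing is copied.
        Span<ulong> values = MemoryMarshal.Cast<Guid, ulong>(guidSpan);
        return (values[0], values[1], values[2], values[3]);
    }
}
```

Note the stack-allocated span never escapes the method; only the extracted values are returned, which is what makes the zero-allocation pattern safe.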
Use stackalloc byte[16] for MD5 hash and stackalloc char[36] for
hex formatting on all TFMs, avoiding two intermediate array
allocations. Now uses the unified Md5Helper.ComputeMd5Hash(string,
Span<byte>) signature.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace ArrayPool<char> rent/return for 2-char hex buffer with
stackalloc char[2] on all TFMs. The chars are written by
HexString.ToHexChars and appended to a StringBuilder, so no
ToString/ToArray overhead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The conditional using for System.Buffers is already handled by
GlobalUsings.cs which imports the correct namespace (BCL or
vendored) based on the TFM.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace StringBuilderCache fallback with ValueStringBuilder and
stackalloc on all TFMs. AppendAsLowerInvariant lowercases each
segment inline, avoiding the extra string allocation from the
previous .ToLowerInvariant() call on the final result. Add a
string? overload to ValueStringBuilder.AppendAsLowerInvariant to
avoid needing explicit .AsSpan() at each call site.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Now that Span<T>/ReadOnlySpan<T> live in the System namespace on all TFMs,
the RVA-backed `static ReadOnlySpan<byte> Prop => new byte[] {...}` idiom
compiles correctly on net461/netstandard2.0 too. Collapse the NETCOREAPP /
net461 ifdef around CharToHexLookup into a single ReadOnlySpan<byte> property
and drop the now-unused SA1201 pragma block.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
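The RVA-backed idiom the commit refers to is the compiler special case where a `new byte[] {...}` expression returned as `ReadOnlySpan<byte>` becomes a pointer into the assembly's static data section, so no array is allocated at runtime. A small illustrative sketch (this is not the tracer's `CharToHexLookup` table, just the same idiom applied to a hex-digit table):

```csharp
using System;

static class HexSketch
{
    // Compiles to a span over RVA static data: no allocation, no static ctor.
    private static ReadOnlySpan<byte> HexDigits => new byte[]
    {
        (byte)'0', (byte)'1', (byte)'2', (byte)'3', (byte)'4', (byte)'5', (byte)'6', (byte)'7',
        (byte)'8', (byte)'9', (byte)'a', (byte)'b', (byte)'c', (byte)'d', (byte)'e', (byte)'f',
    };

    public static void ToHexChars(byte value, Span<char> destination)
    {
        destination[0] = (char)HexDigits[value >> 4];  // high nibble
        destination[1] = (char)HexDigits[value & 0xF]; // low nibble
    }
}
```

Previously this optimization only applied where the BCL `ReadOnlySpan<byte>` existed; with the vendored type in `System`, net461/netstandard2.0 get it too.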
Delete the NETCOREAPP/net461 ifdef in TryExtract and keep only the
span-based path. Both HexString.TryParseTraceId and TryParseUInt64 already
have ReadOnlySpan<char> overloads, so the shared path works on every TFM —
saving two substring allocations per B3 single-header extraction on net461.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
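The substring savings can be sketched with a deliberately simplified header shape (this is a hypothetical `traceid-spanid` form, not the full B3 single-header grammar, and the method name is illustrative):

```csharp
using System;
using System.Globalization;

static class B3SingleHeaderSketch
{
    // Slicing a ReadOnlySpan<char> replaces the Substring allocations
    // the old net461-only path paid on every extraction.
    public static bool TryParseSpanId(ReadOnlySpan<char> header, out ulong spanId)
    {
        spanId = 0;
        int dash = header.IndexOf('-');
        if (dash < 0)
        {
            return false;
        }

        ReadOnlySpan<char> spanIdPart = header.Slice(dash + 1); // no allocation
        return ulong.TryParse(spanIdPart, NumberStyles.HexNumber, CultureInfo.InvariantCulture, out spanId);
    }
}
```

Because the hex parsers already had `ReadOnlySpan<char>` overloads, the span path becomes the only path once the ifdef is gone.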
@andrewlock andrewlock force-pushed the andrew/use-new-span-t branch from d54f3cf to d1485d3 Compare April 24, 2026 11:31
@andrewlock andrewlock enabled auto-merge (squash) April 24, 2026 12:22
@andrewlock andrewlock disabled auto-merge April 24, 2026 12:22
@andrewlock andrewlock enabled auto-merge (squash) April 24, 2026 12:48
@andrewlock andrewlock merged commit 0068997 into master Apr 24, 2026
141 checks passed
@andrewlock andrewlock deleted the andrew/use-new-span-t branch April 24, 2026 13:17
@github-actions github-actions Bot added this to the vNext-v3 milestone Apr 24, 2026
andrewlock added a commit that referenced this pull request Apr 24, 2026
… (#8486)

## Summary of changes

Update the tag list generator to always use `ReadOnlySpan<byte>`
properties instead of `byte[]`

## Reason for change

After moving our vendored `Span<T>` implementation into the `System`
namespace, various optimizations open up to us, including using
`ReadOnlySpan<byte>` properties on .NET Framework instead of `static
readonly byte[]` to avoid startup costs.

## Implementation details

Replace the code generated by the generator, and update the generated
code

## Test coverage

Covered by snapshot tests and behaviour is covered by existing tests.
We'll check the benchmarks to make sure that we _don't_ see any perf
impact (there shouldn't be, impact should just be reduced startup costs)

## Other details

Depends on a stack updating our vendored system code

- #8391
- #8454
- #8455
- #8459
- #8461
- #8469
- #8476
- #8477
andrewlock added a commit that referenced this pull request Apr 27, 2026
… string literals (#8487)

## Summary of changes

Convert `Encoding.Utf8.GetBytes` calls to static constants using UTF8
string literals

## Reason for change

After moving our vendored `Span<T>` implementation into the `System`
namespace, various optimizations open up to us, including using UTF-8
string literals to encode strings to UTF-8 at compile time instead of
at runtime. By combining them with `static ReadOnlySpan<byte>` properties,
these also become zero-allocation, so we get reduced overall memory
usage as well as better startup time.

## Implementation details

Replace the following with utf8 string literals:
- `StringEncoding.UTF8.GetBytes`
- `Encoding.UTF8.GetBytes`
- `EncodingHelpers.UTF8NoBom.GetBytes`
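The before/after shape of this change can be sketched as follows (C# 11 `u8` literals; the `"service"` key is an illustrative name, not necessarily one of the strings converted):

```csharp
using System;
using System.Text;

static class Utf8LiteralSketch
{
    // After: encoded to UTF-8 at compile time, backed by static assembly
    // data, zero allocation per access.
    public static ReadOnlySpan<byte> ServiceKey => "service"u8;

    // Before (the pattern being replaced): encodes at runtime and
    // allocates a byte[] per call or per static initializer.
    public static byte[] ServiceKeyOld() => Encoding.UTF8.GetBytes("service");
}
```

Both produce identical bytes; the literal form just moves the encoding to compile time and the storage into the assembly's data section.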

## Test coverage

All covered by existing tests

## Other details

Depends on a stack updating our vendored system code

- #8391
- #8454
- #8455
- #8459
- #8461
- #8469
- #8476
- #8477
- #8486
@andrewlock andrewlock added the type:performance (Performance, speed, latency, resource usage (CPU, memory)) label May 1, 2026

Labels

  • AI Generated: Largely based on code generated by an AI or LLM. This label is the same across all dd-trace-* repos
  • type:performance: Performance, speed, latency, resource usage (CPU, memory)
  • type:refactor


4 participants